Physiological Measurement
IOP Publishing
Preprints posted in the last 7 days, ranked by how well they match Physiological Measurement's content profile, based on 12 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit.
Lin, R.; Halfwerk, F. R.; Donker, D. W.; Tertoolen, J.; van der Pas, V. R.; Laverman, G. D.; Wang, Y.
Objective: Skin sympathetic nerve activity (SKNA) has emerged as a promising non-invasive surrogate measure of sympathetic drive, but its relevant physiological characteristics remain ill-defined. This observational study investigates its regulatory patterns during rest and the Valsalva maneuver (VM) in healthy participants. Method: Using a two-layer strategy integrating signal analysis and physiological modelling, we analyzed data recorded from 41 subjects performing repeated VMs. The observational layer includes time-domain feature comparisons using linear mixed-effect models and time-varying spectral coherence analysis. The mechanistic layer proposes a mathematical model to investigate whether baroreflex and respiratory modulation are sufficient to reproduce the observed heart rate (HR) and average SKNA (aSKNA) dynamics. Main Results: Mean integrated SKNA (iSKNA) showed a more pronounced response to VM-induced effects than heart rate variability (HRV). The mean iSKNA increase during VM also varied with BMI and sex. Coherence analysis indicated that iSKNA synchronized strongly with EDR under resting conditions. The proposed model successfully reproduced the main characteristics of aSKNA dynamics, yielding a high median Pearson correlation coefficient (PCC) of 0.80 ([Q1, Q3] = [0.60, 0.91]). In contrast, HR dynamics were only partially captured, with a median PCC of 0.37 ([Q1, Q3] = [0.16, 0.55]). These results suggest SKNA provides a more direct representation of sympathetic burst dynamics during VM in healthy subjects. Significance: This study provides convergent evidence that SKNA reflects known autonomic regulatory influences in healthy subjects. These findings strengthen the physiological interpretability of SKNA while clarifying its appropriate use as a practical biomarker of sympathetic function.
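The per-subject model fits above are summarized as a median Pearson correlation coefficient with quartiles. As a minimal sketch (not the authors' code), the reported statistics could be computed like this, where `pcc_values` would hold one PCC per subject:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

def summarize_fits(pcc_values):
    """Median and [Q1, Q3] bounds, matching the abstract's reporting style."""
    q1, med, q3 = np.percentile(pcc_values, [25, 50, 75])
    return float(med), (float(q1), float(q3))
```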
Mizutani, N.; Nishizawa, S.; Enomoto, Y.; Okamoto, H.; Baba, R.; Misawa, A.; Takahashi, K.; Tada, Y.; Lin, Y.-C.; Shih, W.-P.
While the need for continuous blood pressure (BP) monitoring in Japan is high, there are no commercially available cuffless devices for personal daily monitoring use. Fingertip-based sensors are a promising alternative as they eliminate the discomfort of repeated cuff inflation. However, their reliability during winter has been a major technical limitation due to cold-induced peripheral vasoconstriction. This study aimed to address this issue by validating a novel fingertip-based continuous BP monitor used by exercising adults during summer and winter. Eleven community-dwelling older adults (mean age, 73.1 ± 8.8 years) were included in this seasonal comparative study. During exercise, we compared a personal fingertip-based continuous monitor (ArteVu) with a standard oscillometric cuff device (Omron) in summer (mean, 26.5°C) and winter (mean, 7.4°C). The study also evaluated the device's accuracy during exercise-induced BP fluctuations and seasonal environmental changes. Awareness of the participants regarding BP management was also assessed using questionnaires. There were strong correlations for systolic BP (SBP) between the two devices in both seasons (r = 0.93 in summer; r = 0.88 in winter). Although the mean difference for SBP was higher in winter than in summer (3.1 ± 11.2 mmHg vs. 0.2 ± 9.4 mmHg), the values remained within a clinically acceptable range for personal monitoring. Notably, 72.7% of participants reported that the ease of using the fingertip-based device significantly increased their awareness and motivation for daily BP management. This study confirms the feasibility of cuffless fingertip-based continuous BP monitoring across different seasons, including in winter. By overcoming the seasonal limitations, this device fills a critical gap in the Japanese health-monitoring market.
Our findings support the development of smaller and more portable models, representing a shift from traditional "snapshot" cuff measurements to continuous and integrated lifestyle monitoring for older adults.
Zeng, A.; O'Hagan, E. T.; Trivedi, R.; Ford, B.; Perry, T.; Turnbull, S.; Sheahen, B.; Mulley, J.; Sedhom, M.; Choy, C.; Biasi, A.; Walters, S.; Miranda, J. J.; Chow, C. K.; Laranjo, L.
Background: Continuous adhesive patch electrocardiographic (ECG) wearables are increasingly prescribed. Patient experience with these devices can influence adherence, but research in this area is limited. This study aimed to explore the perceptions and experiences of patients receiving wearable cardiac monitoring technology as part of their routine care through the lens of treatment burden. Methods: This was a qualitative study with semi-structured phone interviews conducted between February and May 2024. We recruited participants from primary care and outpatient clinics using maximum variation sampling to ensure diversity in sex, ethnicity, and education levels. Interviews were audio-recorded, transcribed, and analysed using reflexive thematic analysis. Results: Sixteen participants (mean age 51 years, 63% female) were interviewed (average duration: 33 minutes). Three themes were developed: 1) "Experience using the device: Burden vs Ease of Use", which captured participants' perceptions of how easily they could integrate the device into their daily lives; 2) "Individual variability in responses to ECG self-monitoring", which covered participants' emotional and cognitive responses to knowing their heart rhythm was monitored; and 3) "The care process shapes patient experiences", which reflected support preferences during the set-up and monitoring period and the uncertainty regarding timely clinical and device feedback. Conclusions: Patients valued cardiac wearables for facilitating diagnosis and felt reassured knowing they were clinically monitored. However, gaps in information provided to patients seemed to cause anxiety for some participants. These concerns could be mitigated through clearer clinician communication and patient education at the time of prescription.
Ogaki, S.; Kaneda, M.; Nohara, T.; Fujita, S.; Osako, N.; Yagi, T.; Tomita, Y.; Ogata, T.
Study Objectives: To evaluate wearable sleep staging across sleep apnea severity, including very severe sleep apnea defined as an apnea-hypopnea index (AHI) ≥ 50 events/h, and to assess how training-set composition affects performance in this subgroup. Methods: We analyzed 552 overnight recordings, 318 from the Sleep Lab Dataset and 234 from the Hospital Dataset. In the Hospital Dataset, 26.5% had very severe sleep apnea. We developed a deep learning model for sleep staging using RR intervals from wrist-worn photoplethysmography and three-axis accelerometry. Baseline performance was assessed by cross-validation under 5-stage and 4-stage staging. We examined night-level associations with AHI severity. We also compared the baseline model with an ablation model trained on the same number of recordings but with more Sleep Lab Dataset and lower-AHI Hospital Dataset recordings, evaluating both models in the very severe subgroup. Results: In 5-stage classification, Cohen's kappa was 0.586 in the Sleep Lab Dataset and 0.446 in the Hospital Dataset. Under 4-stage staging, the gap narrowed, with kappa values of 0.632 and 0.525, respectively. In the Hospital Dataset, performance declined with increasing AHI severity. Among 62 recordings with very severe sleep apnea, reducing high-AHI representation in training lowered kappa from 0.365 to 0.303. Conclusions: Wearable sleep staging performance declined with greater sleep apnea severity in this clinical cohort. Clinical utility may benefit from training data that better represent the target severity spectrum and from selecting staging granularity to match the intended use case. Statement of Significance: Repeated laboratory polysomnography is impractical for long-term sleep apnea management. Wearable sleep staging could support scalable monitoring, yet its reliability in clinically severe sleep apnea has remained unclear. This study developed and evaluated a wearable sleep staging approach in both sleep-laboratory and hospital cohorts.
The hospital cohort included many severe and very severe cases. Performance was lower in the hospital cohort and declined with greater sleep apnea severity. A coarser staging scheme reduced the gap between cohorts, and models trained without representative very severe cases performed worse in this target population. These findings highlight the value of severity-aware model development and motivate future multi-night home validation with reliability cues.
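Cohen's kappa, the agreement metric reported throughout this abstract, corrects observed epoch-level agreement for the agreement expected by chance. A minimal sketch of the computation (not the authors' implementation):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between two label sequences beyond chance."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.union1d(y_true, y_pred)
    po = np.mean(y_true == y_pred)  # observed agreement
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)  # chance agreement
    return float((po - pe) / (1 - pe))
```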
Fonseca, P.; Ross, M.; van Meulen, F.; Asin, J.; van Gilst, M. M.; Overeem, S.
Objective: Long-term monitoring of obstructive sleep apnea (OSA) severity may be relevant for several clinical applications. We developed a method for estimating the apnea-hypopnea index (AHI) using wrist-worn, reflective photoplethysmography (PPG). Approach: A neural network was developed to detect respiratory events using PPG and PPG-derived sleep stages as input. The development database encompassed retrospective data from three polysomnographic datasets (N=3111), including a dataset with concurrent reflective PPG recordings from a wrist-worn device (N=969). The model was pre-trained with (transmissive) finger-PPG signals from all overnight recordings and then fine-tuned to wrist-PPG characteristics using transfer learning. Validation was performed on the test portion of the development set and on a fourth, external hold-out dataset containing both wrist-PPG and PSG data (N=171). Performance was evaluated in terms of AHI estimation accuracy and OSA severity classification. Main Results: The fine-tuned wrist-PPG model demonstrated strong agreement with the PSG-derived gold-standard AHI, achieving intra-class correlation coefficients of 0.87 in the test portion of the development set and 0.91 in the external hold-out validation set. Diagnostic performance was high, with accuracies above 80% for all severity thresholds. Significance: The study highlights the potential of reflective PPG-based AHI estimation, achieving high estimation performance in comparison with PSG. These measurements can be performed with relatively comfortable sensors integrated in convenient wrist-worn wearables, enabling long-term assessment of sleep-disordered breathing, both in a diagnostic phase and during therapy follow-up.
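The severity classification evaluated here presumably maps estimated AHI onto the conventional clinical categories; the thresholds below (5, 15, and 30 events/h) are the standard ones, not values stated in the abstract. A minimal sketch:

```python
def osa_severity(ahi):
    """Conventional OSA severity category from the apnea-hypopnea index
    (events/h). Thresholds are the standard clinical cut-offs, assumed
    rather than quoted from this study."""
    if ahi < 5:
        return "normal"
    elif ahi < 15:
        return "mild"
    elif ahi < 30:
        return "moderate"
    return "severe"
```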
Atzenhoefer, M.; Nelson, B.; Atzenhoefer, T. E.; Staudacher, M.; Boxwala, H.; Iqbal, F. M.
Aims: Responses to remote pulmonary artery pressure data vary across programs. We evaluated SMART-HF, a structured pulmonary artery diastolic pressure (PAD)-guided workflow, in a community heart failure cohort. Methods: We retrospectively analysed adults with heart failure and an implanted pulmonary artery pressure sensor managed with SMART-HF. PAD was calculated from prespecified 14-day windows at baseline, 90 days, and 6 months. Two hemodynamic management performance indices (HMPI) were prespecified: the 6-Month Delta HMPI (PAD reduction >2 mmHg from baseline) and the 90-Day Target HMPI (PAD ≤20 mmHg at 90 days). Exploratory analyses evaluated patients with baseline PAD >20 mmHg. Results: Of 37 patients, 36 had paired 90-day and 29 had paired 6-month windows. Mean PAD decreased from 18.3 ± 7.0 to 16.1 ± 6.3 mmHg at 90 days and from 18.8 ± 6.8 to 15.5 ± 5.8 mmHg at 6 months (both P < 0.001). The 90-Day Target HMPI was achieved in 26/36 (72.2%) and the 6-Month Delta HMPI in 19/29 (65.5%) [95% CI 45.7-82.1]. In the exploratory subgroup (baseline PAD >20 mmHg), mean PAD changes were -2.9 ± 3.6 mmHg at 90 days (n = 19; P = 0.002) and -4.9 ± 4.9 mmHg at 6 months (n = 15; P = 0.002). Conclusions: SMART-HF was associated with improved ambulatory pulmonary artery diastolic pressure control at 90 days and 6 months. Exploratory subgroup findings support further evaluation in patients with elevated baseline pulmonary artery diastolic pressure.
Hamida, H. B.; El Ouaer, M.; Abdelmoula, S.; El Ghali, M.; Bizid, M.; Chamtouri, I.; Monastiri, K.
Background: Patent ductus arteriosus (PDA) is a common and potentially serious cardiovascular condition in preterm infants, particularly those with low gestational age and birth weight. Its management remains controversial due to variability in screening, diagnostic criteria, and treatment strategies. This study aimed to evaluate risk factors, outcomes, and management strategies for PDA in preterm infants, and to identify predictors of clinical and echocardiographic response to therapy. Methods: We conducted a retrospective cohort study over a 4-year period (2016-2019) in the neonatal intensive care unit (NICU) of a tertiary care center. All consecutive preterm infants admitted during the study period were eligible. Infants with echocardiographically confirmed PDA who received pharmacological treatment with intravenous paracetamol or ibuprofen were included in the analysis. Missing data were minimal and handled using available-case analysis. Statistical analyses included descriptive statistics, Pearson's chi-square test, and multivariable logistic regression. Results: Among 2154 preterm infants admitted to the NICU, 60 were diagnosed with PDA (incidence: 2.8%). The mean gestational age was 29 ± 2.6 weeks, and the median birth weight was 1200 g. Respiratory distress occurred in 95% of cases, mainly due to hyaline membrane disease (86.7%). PDA was symptomatic in 80% of infants. First-line treatment resulted in clinical improvement in 77% and ductal closure in 83.3% of cases, most within 3 days. Predictors of successful closure included gestational age ≥ 28 weeks (OR = 5.9; 95% CI: 1.7-20.2) and antenatal corticosteroid exposure (OR = 1.2; 95% CI: 1.0-1.6). Overall mortality was 35% and was significantly higher in infants < 28 weeks (OR = 5.0; 95% CI: 2.4-10.3). Clinical improvement (OR = 3.7) and echocardiographic closure (OR = 4.5) after first-line treatment were associated with reduced mortality.
Conclusions: PDA in preterm infants is associated with substantial morbidity and mortality, particularly in those born before 28 weeks of gestation. Early diagnosis, antenatal corticosteroid exposure, and timely pharmacological treatment may improve outcomes. Systematic echocardiographic screening in high-risk neonates should be considered.
Thomas, C.; Kim, J. Y.; Hasan, A.; Kpodzro, S.; Cortes, J.; Day, B.; Jensen, S.; L'Huillier, S.; Oden, M. O.; Zumbado Segura, S.; Maurer, E. W.; Tucker, S.; Robinson, S.; Garcia, B.; Muramalla, E.; Lu, S.; Chawla, N.; Patel, M.; Balu, S.; Sendak, M.
Safety net healthcare delivery organizations (SNOs) serve vulnerable populations but face persistent challenges in adopting new technologies, including AI. While systematic barriers to technology adoption in SNOs are well documented, little is known about how AI is implemented in these settings. This study explored real-world AI adoption in SNOs, focusing on identifying barriers encountered across the AI lifecycle and strategies used to overcome them. Five SNOs in the U.S. participated in a 12-month technical assistance program, the Practice Network, to implement AI tools of their choosing. Observed barriers and mitigation strategies were documented throughout program activities and, at the conclusion of the program, reviewed and refined with participants using a participatory research approach to ensure findings reflected lived experiences and organizational contexts. Key barriers emerged during the Integration and Lifecycle Management phases and included gaps in AI performance evaluation and impact assessments, communication with patients about AI use, foundational AI education, financial resources for purchasing and maintaining AI tools, and AI governance structures. Effective strategies for addressing these barriers were primarily supported through centralized expertise, structured guidance, and peer learning. These findings provide granular, actionable insights for SNO leaders, offering guidance for anticipating barriers and proactively planning mitigation strategies. By including SNO perspectives, the study also contributes to the broader health AI ecosystem and underscores the importance of participatory, collaborative approaches to support safe, effective, and ethical AI adoption in resource-constrained settings. Author Summary: Safety net organizations (SNOs) are healthcare systems that primarily serve low-income and underinsured patients.
While interest in artificial intelligence (AI) in healthcare has grown rapidly, little is known about how these organizations experience AI adoption in practice. In this study, we partnered with five SNOs over a 12-month program to document the challenges they encountered when implementing AI tools and the strategies they used to address them. We worked closely with SNO staff throughout the process to ensure our findings reflected their lived experiences with AI implementation. We found that the most common challenges arose when organizations tried to integrate AI into daily operations and monitor and maintain those tools over time. Specific barriers included difficulty evaluating whether AI was performing as expected, limited guidance on communicating with patients about AI use, a lack of resources for staff training, limited financial resources, and the absence of formal governance structures. Successful strategies for overcoming these challenges drew on shared knowledge and structured support provided by the program, as well as learning from peer organizations. These findings offer practical guidance for SNO leaders planning or managing AI adoption, and contribute to a broader conversation about what is required to implement AI safely and effectively in healthcare settings that serve the most medically and socially vulnerable patients.
Chowdhury, A.; Irtiza, A.
Background: The urgent care departments in Europe face a structural paradox: accelerating digitalisation is accompanied by a patient population that is disproportionately unable to engage with standard digital tools. An internal analysis at the Emergency Department (Akutafdelingen) of Nordsjaellands Hospital in Hilleroed, Denmark found that 43% of emergency patients struggle with digital solutions - a figure that reflects the predictable composition of acute care populations rather than any individual failing. Objective: This paper presents the design, iterative development, and secondary validation of the ED Adaptive Interface (v5): a prototype adaptive patient terminal developed in response to this challenge. The system operationalises what the author terms impairment-first design - a methodology that treats the most constrained patient experience as the primary design problem and derives the standard experience as a subset. The interface configures itself in under ten seconds via nurse-led setup, adapting across four axes of impairment: visual, motor, speech, and cognitive. System: Version 4 supports five accessibility modes, a heatmap pain assessment grid, a Privacy and Dignity panel, a live workflow tracker with care notifications, structured dual-category help requests, and plain-language medical term definitions across four languages. Version 5, reported here for the first time, introduces a Condition Worsening Escalation button, a Referral Pathway Display, a "Why Am I Waiting?" triage explainer, a Symptom Progression Log, MinSP/Yellow Card Scan simulation, expanded language support (seven languages: English, Danish, Arabic with full RTL layout, Turkish, Romanian, Polish, and Somali), and an expanded ten-item Communication Board. The entire system runs as a single 79-kilobyte HTML file with zero infrastructure requirements. 
Methods: To base the design on patient-generated evidence, two independent social media threads were subjected to an inductive thematic analysis (Braun and Clarke, 2006): a primary corpus of 83 entries in the Facebook group Foreigners in Denmark (collected March 2026) and a corroborating corpus in an international community group in the Aarhus region (collected April 2026). All identifiers in both datasets were fully anonymised under GDPR Article 89 research provisions prior to analysis. No participants were contacted. Generative AI tools were used to assist with drafting, writing, and prototype code development; all scientific content, data collection, analysis, and conclusions are the sole responsibility of the authors. Results: The first discourse corpus produced five major themes corresponding to the five problem areas the prototype was designed to address: system navigation and triage literacy gaps (31 entries); language and cultural barriers (6 entries); communication failures during care (5 entries); staff overload and capacity constraints (8 entries); and pain and severity assessment failures (14 entries). The corroborating dataset supported all five themes and introduced two additional themes: differential treatment of international patients and medical gaslighting as a long-term pattern of patient advocacy failure. One structural finding - the five most-liked comments incorrectly criticised the original poster for self-referring when she had received explicit 1813 telephone triage approval - directly inspired the Referral Pathway Display and "Why Am I Waiting?" features in v5. Conclusions: The convergence of design rationale and independent social evidence across all five problem categories suggests that impairment-first design is not a niche accessibility concern but a structural approach to healthcare interface quality. The prototype is ready for a structured clinical pilot using the System Usability Scale (SUS) and semi-structured staff interviews. 
The long-term roadmap includes full MinSP integration, hospital PMS connectivity, and clinical validation.
Schwoebel, J.; Frasch, M.; Spalding, A.; Sewell, E.; Englert, P.; Halpert, B.; Overbay, C.; Semenec, I.; Shor, J.
As health systems begin deploying autonomous AI agents that make independent clinical decisions and take direct actions within care workflows, ensuring patient safety and care quality requires governance standards that go beyond existing medical device frameworks designed for human-in-the-loop prediction tools. This paper introduces the Healthcare AI Agents Regulatory Framework (HAARF), a comprehensive verification standard for autonomous AI systems in clinical environments, developed collaboratively with 40+ international experts spanning regulatory authorities, clinical organizations, and AI security specialists. HAARF synthesizes requirements from nine major regulatory frameworks (FDA, EU AI Act, Health Canada, UK MHRA, NIST AI RMF, WHO GI-AI4H, ISO/IEC 42001, OWASP AISVS, IMDRF GMLP) into eight core verification categories comprising 279 specific requirements across three risk-based implementation levels. The framework addresses critical gaps in health system readiness for autonomous AI including: (1) progressive autonomy governance with clinical accountability, (2) tool-use security for agents that independently access EHRs, medical devices, and clinical systems, (3) continuous equity monitoring and bias mitigation across diverse patient populations, and (4) clinical decision traceability preserving human oversight authority. We validate HAARF's enforcement capabilities through a scenario-based red-team evaluation comprising six adversarial scenarios executed under baseline (no middleware) and HAARF-guardrailed conditions (N = 50 trials each, Gemini 2.5 Flash primary with Claude Sonnet 4.6 cross-model validation). In baseline conditions, the agent model executes unauthorized tools in 56-60% of adversarial trials. Under the HAARF condition, deterministic middleware enforcement reduces the unauthorized-tool success rate to 0%, with 0% contraindication misses and 0% policy-injection success (95% Wilson CI [0.00, 0.07]).
Cross-model validation confirms identical security metrics, supporting HAARF's model-agnostic design. Mapping analysis demonstrates 48-88% coverage of major regulatory frameworks, with per-category FDA alignment ranging from 73% (C5, Agent Registration) to 91% (C3, Cybersecurity; C7, Bias & Equity). Initial validation with healthcare organizations shows a 40-60% reduction in multi-jurisdictional compliance burden and improved clinical safety governance outcomes. HAARF provides health systems with a practical, risk-stratified pathway for safe AI agent deployment, shifting from reactive compliance to proactive quality governance while maintaining rigorous patient safety standards and human-centered care principles.
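The reported 95% Wilson CI of [0.00, 0.07] for 0 policy-injection successes in 50 trials can be reproduced with the standard Wilson score interval. A minimal sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% with z=1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, centre - half), min(1.0, centre + half)
```

For 0 successes in 50 trials this yields a lower bound of 0.00 and an upper bound of about 0.07, matching the interval quoted above.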
Sumner, S. F.; Sakita, F. M.; Haukila, K. F.; Wanda, L.; Kweka, G. L.; Mlangi, J. J.; Shayo, P.; Tarimo, T. G.; Khanna, S.; Wang, C.; Pyne, A.; Manavalan, P.; Thielman, N. M.; Bettger, J. P.; Hertz, J. T.
Acute myocardial infarction (AMI) is an increasing cause of morbidity and mortality in Sub-Saharan Africa (SSA) but is often underdiagnosed and undertreated. To address this gap, the Multicomponent Intervention to Improve Myocardial Infarction Care (MIMIC) was developed and implemented in the emergency department (ED) of a regional referral center in northern Tanzania. We conducted in-depth interviews with 20 key stakeholders (physicians, nurses, administrators, and patients) who participated in MIMIC during the first year of implementation. Purposive sampling was used to recruit a broad range of participants. Interviews were guided by a semi-structured interview guide informed by the Theoretical Framework of Acceptability (TFA). Interview transcripts were thematically analyzed by a team of coders using an inductive, grounded theory approach guided by the seven TFA domains. Nineteen major themes emerged across all TFA domains. Overall, participants described MIMIC as highly acceptable, minimally burdensome, and well-aligned with professional and ethical values. Perceived effectiveness was most emphasized, with staff citing improvements in AMI recognition, ECG and troponin testing, and use of evidence-based therapies. All components were highlighted as effective and easily integrated into existing workflows. Patients valued the educational pamphlet for improving knowledge and self-efficacy, though staff expressed concerns about distributing it during acute care, contributing to inconsistent delivery. Champions were viewed as key in promoting adherence and sustaining implementation of the intervention. MIMIC was widely acceptable in all seven TFA domains among ED providers and patients, with perceived effectiveness driving positive attitudes across stakeholder groups. Use of a co-design approach in MIMIC development likely contributed to high intervention acceptability. Patient education strategies may require adaptation to improve fidelity. 
These findings suggest that continued implementation and future adaptation of MIMIC may be feasible.
Liu, J.; Fan, J.; Deng, Z.; Tang, X.; Zhang, H.; Sharma, A.; Li, Q.; Liang, C.; Wang, A. Y.; Liu, L.; Luo, K.; Liu, H.; Qiu, H.
Background: Patient-ventilator synchrony, an essential prerequisite for non-invasive mechanical ventilation, requires accurate matching of every phase of respiration between the patient and the ventilator. Methods: We developed a long short-term memory (LSTM)-based model that can predict the inspiratory and expiratory time of the patient. This model consisted of two hidden layers, each with eight LSTM units, and was trained using a dataset of approximately 27,000 500-ms flow signal segments that captured both inspiratory and expiratory events. Results: The LSTM model achieved 97% accuracy and F1-score on the test data, and the average trigger error was less than 2.20%. In the first trial, 10 volunteers were enrolled. In "Compliance" mode, 78.6% of the triggering by the LSTM model was compatible with neuronal respiration, higher than that of the Auto-Trak model (74.2%). The Auto-Trak model performed marginally better at pressure support levels of 5 and 10 cmH2O. Given the success of the first clinical trial, we further tested the models in five patients with acute respiratory distress syndrome (ARDS). The LSTM model exhibited 60.6% of the triggering in the 33%-box, better than the 49.0% of the Auto-Trak model, and the PVI index of the LSTM model was significantly lower than that of the Auto-Trak model (36.5% vs 52.9%). Conclusions: Overall, the LSTM model performed comparably to, or better than, the Auto-Trak model in both latency and PVI index. While other mathematical models have been developed, our model was effectively embedded in a chip to control ventilator triggering. Trial registration: Approval Number: 2023ZDSYLL348-P01; Approval Date: 28/09/2023. Clinical Trial Registration Number: ChiCTR2500097446; Registration Date: 19/02/2025.
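As a hedged illustration of the building block involved (not the authors' trained model), a single LSTM cell update over one flow sample looks like this; the paper's model stacks two layers of eight such units over 500-ms windows, and the gate ordering here is an assumed convention:

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM cell update with hidden size H.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias.
    Gate order [input, forget, cell, output] is an assumption."""
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    g = np.tanh(z[2*H:3*H])             # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:]))      # output gate
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c
```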
Dai, H.-J.; Mir, T. H.; Fang, L.-C.; Chen, C.-T.; Feng, H.-H.; Lai, J.-R.; Hsu, H.-C.; Nandy, P.; Panchal, O.; Liao, W.-H.; Tien, Y.-Z.; Chen, P.-Z.; Lin, Y.-R.; Jonnagaddala, J.
Accurate recognition and de-identification of sensitive health information (SHI) in spoken dialogues requires multimodal algorithms that can understand medical language and contextual nuance; errors at either step risk exposing SHI. Additionally, the variability and complexity of medical terminology, along with the inherent biases in medical datasets, further complicate this task. This study introduces the SREDH/AI-Cup 2025 Medical Speech Sensitive Information Recognition Challenge, which focuses on two tasks. Task 1: speech transcription systems must accurately transcribe speech into text. Task 2: medical speech de-identification systems must detect and appropriately classify mentions of SHI. The competition attracted 246 teams; top-performing systems achieved a mixed error rate (MER) of 0.1147 and a macro F1-score of 0.7103, with average MER and macro F1-score of 0.3539 and 0.2696, respectively. Results were presented at the IW-DMRN workshop in 2025. Notably, LLMs were prevalent across both tasks: 97.5% of teams adopted LLMs for Task 1 and 100% for Task 2, highlighting their growing role in healthcare. Furthermore, we fine-tuned six models, demonstrating strong precision (~0.885-0.889) with slightly lower recall (~0.830-0.847), resulting in F1-scores of 0.857-0.867.
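The macro F1-score used to rank de-identification systems is the unweighted mean of per-class F1 scores, so rare SHI categories count as much as common ones. A minimal sketch (not the challenge's official scorer):

```python
def macro_f1(y_true, y_pred, labels):
    """Macro F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```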
Villar-Valero, J.; Nebot, L.; Soto-Iglesias, D.; Falasconi, G.; Berruezo, A.; Boukens, B. J. D.; Trenor, B.; Gomez, J. F.
Background: Sympathetic modulation via the stellate ganglia is increasingly recognized as a contributor to ventricular arrhythmogenesis after myocardial infarction. However, the mechanisms by which autonomic remodeling interacts with chronic infarct substrates to shape arrhythmic vulnerability remain incompletely understood. Objectives: To test the hypothesis that left- and right-sided stellate ganglion-mediated sympathetic nervous system (SNS) modulation differentially reshapes ventricular arrhythmic vulnerability in chronic post-infarcted substrates, and that the re-entry vulnerability index (RVI) detects changes in vulnerability beyond conventional stimulation-based inducibility. Methods: Fourteen patient-specific ventricular models with chronic post-infarcted remodeling were reconstructed from imaging data. A total of 336 simulations were performed under different combinations of stellate ganglion modulation, border zone remodeling, and fibroblast density. Arrhythmic vulnerability was quantified using 3D RVI mapping during paced rhythms and compared with conventional stimulation-based inducibility outcomes. Results: Stellate ganglion modulation induced marked, regionally heterogeneous changes in repolarization timing, resulting in lower and more negative RVI values in vulnerable regions. More negative RVI values reflect increased propensity for wavefront-waveback interaction and reentry initiation. Across the cohort, stellate modulation consistently decreased RVImin, even when inducibility outcomes remained unchanged. These findings indicate that SNS modulation can create a substrate more permissive to reentry independently of whether ventricular arrhythmia is triggered during programmed stimulation. Conclusions: Stellate ganglion-mediated sympathetic modulation dynamically reshapes ventricular arrhythmic vulnerability in chronic post-infarcted substrates.
RVI provides a spatially resolved, vulnerability-based metric that complements inducibility testing by revealing autonomic-substrate interactions underlying arrhythmogenesis. Condensed Abstract: Sympathetic modulation via the stellate ganglia can alter ventricular repolarization and promote arrhythmogenesis after myocardial infarction, yet clinical responses remain heterogeneous. Using 14 patient-specific post-infarction ventricular models, we simulated left- and right-sided stellate modulation across combinations of border zone remodeling and fibrosis (336 simulations). Stellate modulation induced regionally heterogeneous repolarization shortening and reduced RVI values, even when programmed stimulation inducibility remained unchanged. These findings suggest that RVI captures substrate-level vulnerability beyond binary induction testing and may improve mechanistic assessment of autonomic-substrate interactions in chronic infarct substrates.
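In its commonly used pairwise form, the RVI is the repolarization time at a proximal site minus the activation time at a coupled distal site; the study maps this over 3D geometries, but the definition itself can be sketched minimally (illustrative node timings, not the authors' pipeline):

```python
def rvi_values(at, rt, pairs):
    """RVI for (proximal i, distal j) node pairs: RVI_ij = rt[i] - at[j].

    at/rt: activation and repolarization times (ms) indexed by node.
    Lower (more negative) values mean the distal site activates near or
    after proximal repolarization, favouring wavefront-waveback interaction.
    """
    return {(i, j): rt[i] - at[j] for (i, j) in pairs}

def rvi_min(at, rt, pairs):
    """RVImin: the most vulnerable (lowest) RVI across the mapped pairs."""
    return min(rvi_values(at, rt, pairs).values())
```

On this toy two-node example, a distal site activating 30 ms after the proximal site repolarizes yields RVI = -30, the kind of negative value the cohort results flag as permissive to reentry.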
Yamasaki, F.; Seike, M.; Hirota, T.; Sato, T.
Show abstract
Background: Deep brain stimulation (DBS) is a treatment option for Parkinson disease (PD). However, the effect of DBS on arterial pressure (AP) remains unexplored. We aimed to develop an artificial baroreflex system for treating orthostatic hypotension (OH) due to central baroreflex failure in patients with PD. To achieve this, we developed an appropriate algorithm after estimating the dynamic responses of the AP to DBS using a white noise system identification method. Methods: We randomly performed DBS while measuring the AP tonometrically in 3 trials involving 3 patients with PD treated with DBS. We calculated the frequency response of the AP to the DBS using a fast Fourier transform algorithm. Finally, the feedback correction factors were determined via numerical simulation. Results: The frequency responses of the systolic AP to random DBS were identifiable in all 3 trials, and the steady-state gain was 8.24 mmHg/STM. Based on these results, the proportional correction factor was set to 0.12, and the integral correction factor was set to 0.018. The computer simulation revealed that the system could quickly and effectively attenuate a sudden AP drop induced by external disturbances such as head-up tilting. Conclusion: An artificial baroreflex system with DBS may be a novel therapeutic approach for OH caused by central baroreflex failure.
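Using the identified steady-state gain (8.24 mmHg/STM) and the reported correction factors (proportional 0.12, integral 0.018), the closed loop can be sketched as a discrete PI controller acting on a first-order plant. The plant time constant and the 20 mmHg tilt-like disturbance below are illustrative assumptions, not values from the study:

```python
def simulate_pi(setpoint=100.0, drop=20.0, gain=8.24, kp=0.12, ki=0.018,
                tau=10.0, dt=1.0, steps=300):
    """Discrete PI control of systolic AP via DBS intensity (STM units).

    gain, kp and ki follow the abstract; tau and drop are assumptions.
    """
    ap_offset = 0.0   # DBS-induced AP contribution (mmHg)
    integral = 0.0
    ap = setpoint - drop
    for _ in range(steps):
        ap = setpoint - drop + ap_offset   # measured AP under disturbance
        err = setpoint - ap
        integral += err * dt
        stim = kp * err + ki * integral    # DBS command
        # first-order plant relaxing toward its DC response gain * stim
        ap_offset += dt / tau * (gain * stim - ap_offset)
    return ap
```

The integral term is what removes the steady-state error: with proportional action alone the loop would settle below the setpoint, whereas here the simulated AP returns to baseline after the disturbance.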
Sato, T.; Ishiseki, M.; Kataoka, Y.; Someko, H.; Sato, H.; Minami, K.; Kaneko, T.; Takeda, H.; Crosby, A.
Show abstract
Objectives: Alarm fatigue is a patient safety concern in ICUs, yet no validated instrument exists to assess alarm fatigue among healthcare professionals in non-Western settings. This study aimed to cross-culturally adapt the Charité Alarm Fatigue Questionnaire (CAFQa) into Japanese and evaluate its reliability and validity among ICU nurses and physicians. Methods: The Japanese CAFQa was cross-culturally adapted following the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) guidelines, including forward translation, back-translation, expert panel review, and cognitive interviews. A multicenter cross-sectional validation study was performed across eight ICUs at five hospitals in Japan. A total of 129 participants (103 nurses and 26 physicians) completed the Japanese CAFQa, the NIOSH Brief Job Stress Questionnaire, and the Insomnia Severity Index (ISI). Structural validity, internal consistency, test-retest reliability (n = 102), convergent validity, and known-groups validity were assessed. Results: Confirmatory factor analysis (CFA) confirmed the two-factor structure with acceptable fit (CFI = 0.922, RMSEA = 0.041, SRMR = 0.076), with standardized factor loadings ranging from 0.33 to 0.82. The two factors were not correlated (r = 0.05). Cronbach's alpha was 0.688 for the overall scale, 0.805 for Alarm Stress, and 0.649 for Alarm Coping. Test-retest ICCs ranged from 0.616 to 0.753. The CAFQa total score correlated with the NIOSH total (r = 0.261) and the ISI total (r = 0.338). Healthcare professionals with ≥4 years of ICU experience had higher Alarm Coping scores than those with 1-3 years (median 7.0 vs 6.5), and physicians scored higher on Alarm Coping than nurses (median 8.0 vs 7.0). Conclusions: The Japanese CAFQa demonstrated acceptable structural validity, reliability, and convergent and known-groups validity, providing the first validated tool for quantitatively measuring alarm fatigue in Japan. 
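The internal-consistency statistic reported above, Cronbach's alpha, compares the sum of per-item score variances with the variance of the total score: alpha = k/(k-1) * (1 - sum(var_i)/var_total). A minimal sketch on toy data (not the study's items):

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, respondents aligned.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))
```

Perfectly parallel items give alpha = 1, while items that vary independently push alpha toward 0, which is why the 0.649 Alarm Coping subscale reads as weaker internal consistency than the 0.805 Alarm Stress subscale.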
Implications for Clinical Practice: The Japanese CAFQa enables ICU managers to quantify alarm fatigue at individual and unit levels, identify high-risk staff, and evaluate the effectiveness of alarm management interventions.
Undurraga Lucero, J. A.; Chesnaye, M.; Simpson, D.; Laugesen, S.
Show abstract
Objective detection of evoked potentials (EPs) is central to digital diagnostics in hearing assessment and clinical neurophysiology, yet current approaches remain time-intensive and sensitive to inter-individual noise variability. Many existing detection methods rely on population-based assumptions or computationally demanding procedures, limiting robustness and efficiency in real-world clinical settings. We present Fmpi, a digital EP detection framework enabling individualised, real-time response detection through analytical modelling of the spectral colour and temporal dynamics of background noise within each recording. Using extensive simulations and large-scale human electroencephalography datasets spanning brainstem, steady-state, and cortical EPs recorded in adults and infants, we demonstrate performance comparable or superior to state-of-the-art bootstrapped methods while operating at a fraction of the computational cost and maintaining well-controlled sensitivity with improved specificity. Importantly, Fmpi incorporates a futility detection mechanism enabling early termination of uninformative recordings, reducing testing time without compromising diagnostic reliability.
Ramirez-Lopez, L.; Kang, P.
Show abstract
Irritable Bowel Syndrome (IBS) affects a substantial proportion of university students, yet its risk factors remain incompletely characterised in South Asian populations. We reanalysed a publicly available dataset of 550 Bangladeshi students from Hasan et al. (2025), conducting a data audit that identified implausible records, including males reporting menstrual symptoms, and reduced the analytic sample to 506 observations. Using Explainable Boosting Machines (EBMs), which capture non-linear effects and pairwise interactions without sacrificing interpretability, we found that psychological distress, elevated BMI, and academic dissatisfaction were the strongest predictors of IBS (mean AUC = 0.852 across 100 stratified train-test splits). Critically, several findings diverged from the original logistic regression analysis: physical activity showed a non-linear risk pattern only at high intensity; the association with gender was substantially weaker once metabolic and psychological factors were also accounted for; and malnourishment did not have as strong an impact as in the original study. These divergences likely arise because the machine-learning model captures non-linear effects and interactions that were not represented in the original regression specification. Our findings underscore the value of reanalysing existing datasets with methods suited to capturing complexity, and highlight data quality verification as a necessary step in secondary analysis.
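The AUC summarising discrimination above has a simple rank-statistic (Mann-Whitney) form: the probability that a randomly chosen case is scored above a randomly chosen control, with ties counted half. A minimal sketch on toy labels and scores:

```python
def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation; ties get half credit."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Averaging this over many stratified train-test splits, as the reanalysis does, guards against a single lucky partition inflating the reported 0.852.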
Kritopoulos, G.; Neofotistos, G.; Barmparis, G. D.; Tsironis, G. P.
Show abstract
Class imbalance in clinical electrocardiogram (ECG) datasets limits the diagnostic sensitivity of automated arrhythmia classifiers, particularly for rare but clinically significant beat types. We propose a three-stage hybrid generative pipeline that combines a spectral-guided conditional Variational Autoencoder (cVAE), a class-conditional latent Denoising Diffusion Probabilistic Model (DDPM), and a Quantum Latent Refinement (QLR) module built on parameterized quantum circuits to augment minority arrhythmia classes in the MIT-BIH Arrhythmia Database. The QLR module applies a bounded residual correction guided by Maximum Mean Discrepancy minimization to align synthetic latent distributions with real class-specific latent banks. A lightweight 1D MobileNetV2 classifier evaluated over five independent random seeds and four augmentation ratios serves as the downstream benchmark. Our findings establish latent diffusion augmentation as an effective strategy for imbalanced ECG classification and motivate further investigation of quantum-classical hybrid methods in cardiac diagnostics.
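The Maximum Mean Discrepancy that guides the QLR module's residual correction measures the distance between two sample distributions under a kernel embedding. A minimal RBF-kernel sketch on toy 2-D latents (the pipeline's actual latent dimensionality and kernel settings are not specified here):

```python
import math

def rbf(u, v, gamma=1.0):
    """Gaussian (RBF) kernel between two equal-length vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between sample sets X and Y."""
    m, n = len(X), len(Y)
    xx = sum(rbf(a, b, gamma) for a in X for b in X) / (m * m)
    yy = sum(rbf(a, b, gamma) for a in Y for b in Y) / (n * n)
    xy = sum(rbf(a, b, gamma) for a in X for b in Y) / (m * n)
    return xx + yy - 2 * xy
```

Minimising this quantity pulls the synthetic latent samples toward the real class-specific latent bank: identical distributions give MMD near zero, so the bounded residual correction has a well-defined target.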
Vikström, A.; Zarrinkoob, L.; Johannesdottir, M.; Wahlin, A.; Hellström, J.; Appelblad, M.; Holmlund, P.
Show abstract
Modelling of hemodynamics in the circle of Willis (CoW) depends on vascular segmentation, which may vary with imaging modality. Computed tomography angiography (CTA) is commonly used in the clinic but involves radiation and injection of contrast agents, whereas magnetic resonance angiography (MRA) offers a non-invasive alternative. This study compares CoW morphology and modelled cerebral perfusion pressure between CTA and MRA segmentations, to validate whether MRA can replace CTA in modelling workflows. CTA and time-of-flight MRA (TOF-MRA) of the CoW were performed in 19 patients undergoing elective aortic arch surgery (67±7 years; 8 women). The CoW was semi-automatically segmented based on signal-intensity thresholding, with the TOF-MRA threshold optimized against the CTA segmentation as the reference standard. Computational fluid dynamics (CFD) modelling, with boundary conditions based on subject-specific flow rates from 4D flow MRI, simulated cerebral perfusion pressure in the segmented geometries. A baseline simulation and a unilateral brain-inflow simulation, i.e., occlusion of a carotid artery, were carried out. Linear mixed models indicated no effect of modality on either average arterial lumen area (CTA - TOF-MRA: -0.2±1.3 mm2; p=0.762) or baseline pressure drops (0.2±1.9 mmHg; p=0.257). In the unilateral inflow simulation, we found no difference in pressure laterality (-6.6±18.4 mmHg; p=0.185) or collateral flow rate (10±46 ml/min; p=0.421). With optimized signal-intensity thresholding, TOF-MRA geometries can be matched to CTA geometries, producing similar morphology and modelled cerebral perfusion pressure. The modelled pressure drops over the collateral arteries were, however, sensitive to the segmentation regardless of modality.
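The CFD model resolves the full 3D pressure field, but the magnitude of a segmental pressure drop, and its extreme sensitivity to lumen area, can be sanity-checked with the Poiseuille relation ΔP = 8μLQ/(πr⁴). The viscosity and example vessel dimensions below are illustrative assumptions, not study values:

```python
import math

def poiseuille_drop_mmhg(flow_ml_min, lumen_area_mm2, length_mm, mu=3.5e-3):
    """Pressure drop over a straight cylindrical segment (Poiseuille flow).

    mu: dynamic viscosity of blood in Pa*s (illustrative value).
    """
    q = flow_ml_min * 1e-6 / 60.0                    # m^3/s
    r = math.sqrt(lumen_area_mm2 * 1e-6 / math.pi)   # radius in m
    length = length_mm * 1e-3                        # m
    dp_pa = 8 * mu * length * q / (math.pi * r ** 4)
    return dp_pa / 133.322                           # Pa -> mmHg
```

Because the drop scales with r⁻⁴ (equivalently, area⁻²), even small segmentation differences in lumen area translate into large pressure differences, consistent with the finding that collateral pressure drops were sensitive to segmentation regardless of modality.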